28 research outputs found

    Parametric shape optimization for combined additive–subtractive manufacturing

    Get PDF
    The final publication is available at Springer via http://dx.doi.org/10.1007/s11837-019-03886-x. In industrial practice, additive manufacturing (AM) processes are often followed by post-processing operations such as heat treatment, subtractive machining, milling, etc., to achieve the desired surface quality and dimensional accuracy. Hence, a given part must be 3D-printed with extra material to enable this finishing phase. This combined additive/subtractive technique can be optimized to reduce manufacturing costs by saving printing time and reducing material and energy usage. In this work, a numerical methodology based on parametric shape optimization is proposed for optimizing the thickness of the extra material, allowing for minimal machining operations while ensuring the finishing requirements. Moreover, the proposed approach is complemented by a novel algorithm for generating inner structures that reduce the part's distortion and weight. The computational effort induced by classical constrained optimization methods is alleviated by replacing both the objective and constraint functions with their sparse grid surrogates. Numerical results showcase the effectiveness of the proposed approach. Peer reviewed. Postprint (published version).
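
    To make the surrogate idea concrete, the following minimal Python sketch replaces an expensive objective and constraint with cheap interpolation-based surrogates before running a constrained optimizer. A dense grid interpolant stands in for the paper's sparse grid surrogates, and all function names and values are illustrative placeholders.

```python
# Minimal sketch, assuming hypothetical cost/constraint functions: sample the
# expensive models once, build cheap surrogates, then optimize the surrogates.
# A dense RegularGridInterpolator stands in for a sparse grid surrogate.
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize, NonlinearConstraint

def machining_cost(t):        # expensive objective (placeholder): cost of
    return np.sum(t ** 2)     # removing the extra-material layer of thickness t

def finishing_margin(t):      # expensive constraint (placeholder): must stay >= 0
    return np.min(t) - 0.2

# Sample both functions once on a grid over two design parameters.
axes = (np.linspace(0.1, 2.0, 17), np.linspace(0.1, 2.0, 17))
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
f_hat = RegularGridInterpolator(axes, np.apply_along_axis(machining_cost, -1, grid))
g_hat = RegularGridInterpolator(axes, np.apply_along_axis(finishing_margin, -1, grid))

# Optimize the cheap surrogates instead of the expensive models.
res = minimize(lambda t: f_hat(t).item(), x0=[1.0, 1.0],
               constraints=NonlinearConstraint(lambda t: g_hat(t).item(), 0, np.inf),
               bounds=[(0.1, 2.0)] * 2)
print(res.x)  # expected near the constraint boundary, t ~ (0.2, 0.2)
```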

    Efficient Self-Shadowing in Scenes with Image-Based Lighting Information and Glossy Materials

    No full text
    Interactive computer simulations are nowadays frequently used in the design and virtual prototyping of products. To reproduce the designer's vision as realistically as possible, these simulators must provide high-quality, plausible lighting, along with a high degree of flexibility regarding the design of surfaces and the choice of materials. In most cases, the simulators additionally allow the user to directly manipulate the geometry or the arrangement of objects. There are already methods that can illuminate such dynamic scenes with image-based lighting information in real time and compute cast shadows: the so-called "Image Based Directional Occlusion" (IBDO) methods. However, they are very restricted with respect to the materials of the illuminated objects: they support only diffuse and slightly glossy materials and are unsuitable for highly glossy and mirror-like surfaces. This thesis presents a new lighting system that, like the IBDO algorithms, generates many light sources from an environment map, but uses them only for gathering occlusion information rather than for directly lighting the scene. For the actual lighting, as in environment mapping, color values are read from the environment map (EM) via a surface-dependent sample vector. The occlusion information gathered via the light sources, stored as Variance Shadow Maps (VSMs), is used to dynamically mask out non-visible parts of the EM. The required flexibility in the choice of materials is provided by different mipmap levels of the EM. The lighting system thus combines the free material choice of environment mapping with the image-based shadow computation of the IBDO methods. The VSMs yield soft shadows at comparatively low computational cost, but lose important self-shadowing details at low resolutions. Controlling the reflectivity of objects via mipmaps allows a smooth transition from 100% mirror-like to 100% diffuse. In addition, algorithms for manipulating the surface geometry, such as bump, normal, and displacement mapping as well as tessellation, can easily be applied beforehand; the lighting system remains compatible. The decisive factor for the performance of the method is the number of light sources used. For reasonable counts (100 to 200), it achieves interactive frame rates on current mid-range hardware (e.g., AMD Radeon HD6850 or NVIDIA GeForce GTX 280). However, the large number of required textures and the associated amount of graphics memory, as well as the forced split of the rendering into multiple render passes, are drawbacks compared to the aforementioned IBDO methods.
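
    The VSM visibility test that masks occluded parts of the environment map follows the standard variance shadow mapping formulation: each texel stores the mean and the mean of squares of light-space depths, and Chebyshev's inequality bounds the probability that a point is lit. The NumPy sketch below mirrors that shader logic on the CPU; the bias value is an assumption.

```python
# Minimal sketch (NumPy stand-in for a shader) of the Variance Shadow Map
# visibility test. A VSM texel holds (E[d], E[d^2]); Chebyshev's inequality
# gives an upper bound on the fraction of occluders closer than the receiver.
import numpy as np

def vsm_visibility(moments, receiver_depth, min_variance=1e-4):
    """moments: array (..., 2) holding (E[d], E[d^2]) per texel."""
    mean, mean_sq = moments[..., 0], moments[..., 1]
    variance = np.maximum(mean_sq - mean ** 2, min_variance)  # bias (assumed)
    d = receiver_depth - mean
    # Fully lit where the receiver is closer than the stored mean depth;
    # otherwise use the Chebyshev upper bound as a soft visibility factor.
    p_max = variance / (variance + d ** 2)
    return np.where(receiver_depth <= mean, 1.0, p_max)

# One texel with mean depth 0.5 and slight depth spread:
print(vsm_visibility(np.array([0.5, 0.26]), receiver_depth=0.7))  # -> 0.2
```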

    Volumetric Subdivision for Efficient Integrated Modeling and Simulation

    No full text
    Continuous surface representations, such as B-spline and Non-Uniform Rational B-spline (NURBS) surfaces, are the de facto standard for modeling 3D objects - thin shells and solid objects alike - in the field of Computer-Aided Design (CAD). For physically based simulation, Finite Element Analysis (FEA) has been the industry standard for many years. In order to analyze physical properties such as stability, aerodynamics, or heat dissipation, the continuous models are discretized into finite element (FE) meshes. A tight integration of, and a smooth transition between, geometric design and physically based simulation are key factors for an efficient design and engineering workflow. Converting a CAD model from its continuous boundary representation (B-Rep) into a discrete volumetric representation for simulation is a time-consuming process that introduces approximation errors and often requires manual interaction by the engineer. Deriving design changes directly from the simulation results is especially difficult, as the meshing process is irreversible. Isogeometric Analysis (IGA) tries to overcome this meshing hurdle by using the same representation for describing the geometry and for performing the simulation. Most commonly, IGA is performed on bivariate and trivariate spline representations (B-spline or NURBS surfaces and volumes). While existing CAD B-Rep models can be used directly for simulating thin-shell objects, simulating solid objects requires a conversion from spline surfaces to spline volumes. As spline volumes need a trivariate tensor-product topology, complex 3D objects must be represented via trimming or by connecting multiple spline volumes, limiting the continuity to C^0. As an alternative to NURBS or B-splines, subdivision models allow for representing complex topologies as a single entity, removing the need for trimming or tiling and potentially providing higher continuity. While subdivision surfaces have shown promising results for designing and simulating shells, IGA on subdivision volumes has remained mostly unexplored apart from the work of Burkhart et al. In this dissertation, I investigate how volumetric subdivision representations are beneficial for a tighter integration of geometric modeling and physically based simulation. Focusing on Catmull-Clark (CC) solids, I present novel techniques in the areas of efficient limit evaluation, volumetric modeling, numerical integration, and mesh quality analysis. I present an efficient link to FEA, as well as my IGA approach on CC solids that improves upon Burkhart et al.'s proof of concept with constant-time limit evaluation, more accurate integration, and higher mesh quality. Efficient limit evaluation is a key requirement when working with subdivision models in geometric design, visualization, simulation, and 3D printing. In this dissertation, I present the first method for constant-time volumetric limit evaluation of CC solids. It is faster than the subdivision-based approach by Burkhart et al. for every topological constellation and parameter point that would require more than two local subdivision steps. Adapting the concepts of well-known surface modeling tools, I present a volumetric modeling environment for CC-solid control meshes. Consistent volumetric modeling operations built from a set of novel volumetric Euler operators allow for creating and modifying topologically consistent volumetric meshes. Furthermore, I show how to manipulate groups of control points via parameters, how to avoid intersections with inner control points while modeling the outer surface, and how to use CC solids in the context of multi-material additive manufacturing. For coupling volumetric subdivision models with established FE frameworks, I present an efficient and consistent tetrahedral mesh generation technique for CC solids. The technique exploits the inherent volumetric structure of CC-solid models and is at least 26 times faster than the tetrahedral meshing algorithm provided by CGAL. This allows the tetrahedral mesh to be re-created or updated almost instantly when the CC-solid model changes. However, the mesh quality strongly depends on the quality of the control mesh. In the context of structural analysis, I present my IGA approach on CC solids. It yields converging simulation results for models with fewer elements and fewer degrees of freedom than FE simulations on tetrahedral meshes with linear and higher-order basis functions. The solver also requires fewer iterations to solve the linear system due to the higher continuity throughout the simulation model provided by the subdivision basis functions. Extending Burkhart et al.'s method, my hierarchical quadrature scheme for irregular CC-solid cells increases the accuracy of the integrals for computing surface areas and element stiffnesses. Furthermore, I introduce a quality metric that quantifies the parametrization quality of the limit volume, revealing distortions, inversions, and singularities. The metric shows that cells with multiple adjacent boundary faces induce singularities in the limit, even for geometrically well-shaped control meshes. Finally, I present a set of topological operations for splitting such boundary cells, resolving the singularities. These improvements further reduce the number of elements required to obtain converging results as well as the time required for solving the linear system.
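
    The tetrahedral mesh generation exploits the hexahedral cell structure of CC-solid models. As background for that step, the sketch below shows the generic six-tetrahedra split of a single hexahedral cell around the diagonal v0-v6; this is a standard decomposition, not necessarily the dissertation's exact scheme. For a conforming mesh, adjacent cells additionally have to agree on the face diagonals, e.g., by flipping the split pattern based on cell parity.

```python
# Standard six-tetrahedra split of a hexahedral cell (assumed scheme, for
# illustration). Corner order: bottom quad v0..v3 counter-clockwise, top quad
# v4..v7 directly above. All six tets share the diagonal v0-v6, and each has
# positive orientation for this corner ordering.
import numpy as np

HEX_TO_TETS = [
    (0, 6, 1, 2), (0, 6, 2, 3), (0, 6, 3, 7),
    (0, 6, 7, 4), (0, 6, 4, 5), (0, 6, 5, 1),
]

def tetrahedralize_hex_cells(cells):
    """cells: (n, 8) array of global vertex indices, one row per hex cell."""
    cells = np.asarray(cells)
    return np.concatenate([cells[:, list(t)] for t in HEX_TO_TETS])

print(tetrahedralize_hex_cells([[0, 1, 2, 3, 4, 5, 6, 7]]))  # (6, 4) tet indices
```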

    Executing cyclic scientific workflows in the cloud

    No full text
    We present an algorithm and a software architecture for a cloud-based system that executes cyclic scientific workflows whose structure may change during run time. Existing approaches either rely on workflow definitions based on directed acyclic graphs (DAGs) or require workarounds to implement cyclic structures. In contrast, our system supports cycles natively, avoids workarounds, and thus reduces the complexity of workflow modelling and maintenance. Our algorithm traverses workflow graphs and transforms them iteratively into linear sequences of executable actions, which we call process chains. Our software architecture distributes the process chains to multiple compute nodes in the cloud and oversees their execution. We evaluate our approach by applying it to two practical use cases from the domains of astronomy and engineering, and compare it with two existing workflow management systems. The evaluation demonstrates that our algorithm is able to execute dynamically changing workflows with cycles and that the design and maintenance of complex workflows are easier than with existing solutions. It also shows that our software architecture can run process chains on multiple compute nodes in parallel to significantly speed up workflow execution. An implementation of our algorithm and the software architecture is available with the Steep Workflow Management System, which we released under an open-source license. The resources for the first practical use case are also available as open source for reproduction.
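
    As a rough illustration of the traversal idea (not Steep's actual implementation), the following sketch turns a workflow graph into linear process chains iteratively: each pass collects every action whose inputs are available, and executing a chain produces outputs that can re-enable actions inside a cycle. All action and field names are hypothetical, and a bounded run counter stands in for a real cycle-termination condition.

```python
# Illustrative sketch: iteratively transform a (possibly cyclic) workflow
# into linear process chains. Each pass emits one chain of executable
# actions; executing it makes new outputs available, which can re-enable
# actions inside a cycle on the next pass.

def build_process_chains(actions, available, max_passes=100):
    """actions: list of dicts with 'id', 'inputs', 'outputs', 'runs_left'."""
    chains = []
    for _ in range(max_passes):
        chain = [a for a in actions
                 if a["runs_left"] > 0 and set(a["inputs"]) <= available]
        if not chain:
            break                       # nothing executable: workflow finished
        for a in chain:                 # "execute" the chain
            available |= set(a["outputs"])
            a["runs_left"] -= 1         # cycle guard: bounded re-execution
        chains.append([a["id"] for a in chain])
    return chains

# A two-action cycle: refine -> check -> refine, bounded to two runs each.
wf = [
    {"id": "refine", "inputs": ["mesh"], "outputs": ["candidate"], "runs_left": 2},
    {"id": "check",  "inputs": ["candidate"], "outputs": ["mesh"],  "runs_left": 2},
]
print(build_process_chains(wf, available={"mesh"}))
# -> [['refine'], ['refine', 'check'], ['check']]
```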

    Volumetric subdivision for consistent implicit mesh generation

    No full text
    In this paper, we present a novel approach for a tighter integration of 3D modeling and physically based simulation. Instead of modeling 3D objects as surface models, we use a volumetric subdivision representation. Volumetric modeling operations allow designing 3D objects in a similar way to surface-based modeling tools, while automatic checks and modifications of inner control points ensure consistency during the design process. Encoding the volumetric information already in the design mesh drastically simplifies and speeds up the mesh generation process for simulation. The transition between design and simulation, and back to design, is consistent and computationally cheap. Since the subdivision and mesh generation can be expressed as a precomputable matrix-vector multiplication, iteration times can be greatly reduced compared to common modeling and simulation setups. This approach is therefore especially well suited for early-stage modeling or optimization use cases, where many geometric changes are made in a short time and their physical effect on the model has to be evaluated frequently. To test our approach, we created, simulated, and adapted several 3D models. We measured and evaluated the timings for generating and applying the matrices for different subdivision levels, and computed several characteristic factors for mesh quality and mesh consistency. For comparison, we analyzed the tetrahedral meshing functionality offered by CGAL for similar numbers of elements. For changing topology, our implicit meshing approach proves to be up to 70 times faster than creating the tetrahedral mesh based only on the outer surface. Without changing the topology, and by precomputing the matrices, we achieve a speed-up of up to 2800, as all the required information is already available.
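
    The core idea, expressing subdivision as a precomputable matrix-vector multiplication, can be sketched with a tiny 1D example: one subdivision step is a sparse matrix, several levels compose by matrix multiplication, and every subsequent geometry edit reduces to a single sparse product. The cubic B-spline stencils below are standard; the paper's volumetric operator is analogous but larger.

```python
# Tiny 1D analogue (assumed example) of precomputing subdivision as a sparse
# operator: one step of cubic B-spline subdivision on a closed polygon.
import numpy as np
from scipy import sparse

def subdivision_matrix(n):
    """Maps n control points to 2n refined points (closed cubic B-spline)."""
    rows, cols, vals = [], [], []
    for i in range(n):
        # even (vertex) point: 1/8, 6/8, 1/8 stencil
        for j, w in (((i - 1) % n, 1 / 8), (i, 6 / 8), ((i + 1) % n, 1 / 8)):
            rows.append(2 * i); cols.append(j); vals.append(w)
        # odd (edge) point: 1/2, 1/2 stencil
        for j, w in ((i, 1 / 2), ((i + 1) % n, 1 / 2)):
            rows.append(2 * i + 1); cols.append(j); vals.append(w)
    return sparse.csr_matrix((vals, (rows, cols)), shape=(2 * n, n))

n = 8
S = subdivision_matrix(2 * n) @ subdivision_matrix(n)  # two levels, built once
control = np.random.rand(n, 3)    # control points edited interactively
refined = S @ control             # per-edit cost: one sparse product, (4n, 3)
print(refined.shape)
```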

    Analyzing and Improving the Parametrization Quality of Catmull-Clark Solids for Isogeometric Analysis

    No full text
    In the field of physically based simulation, high quality of the simulation model is crucial for the correctness of the simulation results and the performance of the simulation algorithm. When working with spline or subdivision models in the context of isogeometric analysis, the quality of the parametrization has to be considered in addition to the geometric quality of the control mesh. Following Cohen et al.'s concept of model quality in addition to mesh quality, we present a parametrization quality metric tailored for Catmull-Clark (CC) solids. It measures the quality of the limit volume based on a quality measure for conformal mappings, revealing local distortions and singularities. We present topological operations that resolve these singularities by splitting certain types of boundary cells that typically occur in interactively designed CC-solid models. The improved models provide higher parametrization quality, which positively affects the simulation results without additional computational costs for the solver.
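
    One generic ingredient of such a metric is a conformal-distortion measure on the Jacobian J of the limit-volume parametrization: for a 3D map, ||J||_F^2 / (3 det(J)^(2/3)) >= 1, with equality exactly for conformal maps, while det(J) <= 0 flags inversions and singularities. The sketch below evaluates this measure; the paper's metric is tailored to CC solids and may differ in detail.

```python
# Hedged sketch of a conformal-distortion quality measure evaluated on the
# Jacobian of a volumetric parametrization (generic ingredient only, not the
# paper's exact CC-solid metric).
import numpy as np

def conformal_distortion(J):
    det = np.linalg.det(J)
    if det <= 0.0:
        return np.inf                    # inversion or singular parameter point
    return (J * J).sum() / (3.0 * det ** (2.0 / 3.0))  # >= 1, 1 iff conformal

print(conformal_distortion(np.eye(3)))                 # 1.0: perfectly conformal
print(conformal_distortion(np.diag([1.0, 1.0, 0.1])))  # > 1: anisotropic distortion
```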

    Efficient Self-Shadowing Using Image-Based Lighting on Glossy Surfaces

    No full text
    In this paper we present a novel natural illumination approach for real-time rasterization-based rendering with environment-map-based high dynamic range lighting. Our approach supports arbitrary glossiness values for surfaces, ranging continuously from completely diffuse to mirror-like. This is achieved by combining cosine-based diffuse, glossy, and mirror reflection models in a single lighting model. We approximate this model by filter functions that are applied to the environment map, resulting in a fast, image-based lookup for the different glossiness values, which gives our technique the high performance necessary for real-time rendering. In contrast to existing real-time rasterization-based natural illumination techniques, our method can handle high-gloss surfaces with directional self-occlusion. While previous works replace the environment map by virtual point light sources in the whole lighting and shadow computation, we keep the full image information of the environment map in the lighting process and only use virtual point light sources for the shadow computation. Our technique was developed for use in real-time virtual prototyping systems for garments, since there a small scene is typically lit by a large environment, which fulfills the requirements for image-based lighting. In this application area, high-performance rendering techniques for dynamic scenes are essential, since a physical simulation is usually running in parallel on the same machine. However, other applications can benefit from our approach as well.
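
    The image-based lookup can be pictured as a prefiltered mip chain of the environment map, where the glossiness continuously selects the level: mip 0 for a mirror, the coarsest mip for a fully diffuse surface. The NumPy stand-in below sketches this shader logic on the CPU; the linear glossiness-to-level mapping is an assumption, since the paper derives its filters from a combined cosine-based reflection model.

```python
# CPU stand-in (assumed mapping) for a glossiness-driven prefiltered
# environment map lookup: glossiness picks a continuous mip level, and two
# adjacent levels are blended, as trilinear filtering would in a shader.
import numpy as np

def shade(mips, direction_uv, glossiness):
    """mips: list of HxWx3 arrays from fine to coarse; glossiness in [0, 1]."""
    lod = (1.0 - glossiness) * (len(mips) - 1)   # continuous mip level
    lo, hi = int(np.floor(lod)), int(np.ceil(lod))
    t = lod - lo
    def tap(level):                              # nearest-texel fetch
        h, w, _ = mips[level].shape
        u, v = direction_uv
        return mips[level][int(v * (h - 1)), int(u * (w - 1))]
    return (1 - t) * tap(lo) + t * tap(hi)       # blend adjacent mip levels

mips = [np.random.rand(2 ** k, 2 ** (k + 1), 3) for k in range(4, -1, -1)]
print(shade(mips, direction_uv=(0.25, 0.5), glossiness=0.7))
```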

    Personalized Visual-Interactive Music Classification

    No full text
    We present an interactive visual music classification tool that allows users to automatically structure music collections in a personalized way. With our approach, users play an active role in an iterative process of building classification models, using different interactive interfaces for labeling songs. The tool combines interfaces for detailed analysis at different granularities, i.e., audio features, songs, and classification results at a glance. Interactive labeling is provided through three complementary interfaces, combining model-centered and human-centered labeling-support principles. A clean visual design of the individual interfaces conveys complex model characteristics to experts and reflects our work in progress towards supporting non-experts. A preliminary usage scenario shows that, with our system, hardly any machine-learning knowledge is needed to create classification models of high accuracy with fewer than 50 labels.
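
    A model-centered labeling-support loop of this kind can be sketched as follows: retrain a classifier after each batch of user labels and propose the most uncertain songs next (uncertainty sampling). Feature extraction and the user are mocked here, and the paper's system additionally offers human-centered interfaces; everything below is illustrative.

```python
# Illustrative sketch of an iterative labeling loop with uncertainty
# sampling: fit on the labels so far, then ask for labels where the model
# is least certain. Features and the "user taste" oracle are mocked.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))                     # stand-in for audio features
true_labels = (X[:, 0] + X[:, 1] > 0).astype(int)  # mock user taste

# Seed with a few labels from each class, then label in batches of five.
labeled = list(np.flatnonzero(true_labels == 1)[:3]) + \
          list(np.flatnonzero(true_labels == 0)[:2])
for _ in range(9):                                 # ~50 labels in total
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[labeled], true_labels[labeled])      # user labels so far
    proba = clf.predict_proba(X)[:, 1]
    uncertainty = -np.abs(proba - 0.5)             # most uncertain songs first
    uncertainty[labeled] = -np.inf                 # don't ask twice
    labeled += list(np.argsort(uncertainty)[-5:])
print(f"{len(labeled)} labels, accuracy: {clf.score(X, true_labels):.2f}")
```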